# Seed for random number generation
set.seed(42)
knitr::opts_chunk$set(cache.extra = knitr::rand_seed)
# Research Questions

- Does variation in financial agencies’ independence affect congressional success in rulemaking?
- How does variation in the number of congressional coalitions affect congressional success in financial agency rulemaking?
- Does either conformity with or deviation from the party of the President affect congressional success?
- Do members of Congress increasingly use partisan rhetoric over time to influence success in financial agency rulemaking?
- Does variation in partisan rhetoric affect congressional success?
# Abstract

The outcomes of federal rulemaking proposals in response to congressional lobbying are integral to understanding how government institutions work. However, federal agencies vary in their level of independence, as measured by how responsive they are to public demands. Previous studies have shown that federal agency rulemaking, especially within the financial divisions, has become more independent over time, with agencies becoming less responsive to public comments. Using data from prior studies by Dr. Jennifer Selin and Dr. Devin Judge-Lord, I study whether this pattern also holds in the congressional sphere, i.e., whether financial federal agencies are responsive to congressmembers’ demands. I also examine whether influence differs for those within the same or the opposing party of the President.
I will gather data on 40 rules issued by financial agencies, including but not limited to the Consumer Financial Protection Bureau, the Treasury Department, and the Office of the Comptroller of the Currency. I focus on public comments submitted by members of Congress and on whether their demands were ultimately fulfilled in the final rule. The research design additionally draws on a previous study by Jennifer L. Selin, who estimated the level of structural independence of several agencies during the Obama administration. My study also includes rules from the subsequent Trump administration, when the Senate majority switched to the Republican party. I will compare rules from both eras to determine whether the influence of political parties has grown over time, including whether politicians increasingly comment on policies. I will also use textual analysis, in addition to hand coding, to see whether elected officials use more partisan rhetoric over time. In this study, “partisan rhetoric” refers to an individual making positive assertions about their own party or negative assertions about the opposing party. Finally, I will examine whether the demands of elected officials who share the President’s party are increasingly met relative to those of the opposing party.
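The textual-analysis step described above could start from a simple dictionary count of partisan language. This is a minimal sketch only: the `partisan_terms` list and the `partisan_score()` helper are hypothetical placeholders, not the study’s actual coding scheme.

```r
# A minimal dictionary-based sketch of scoring "partisan rhetoric".
# The word list is an illustrative placeholder, not a validated lexicon.
partisan_terms <- c("democrat", "democrats", "republican", "republicans",
                    "partisan", "obstruction", "radical")

partisan_score <- function(text) {
  # lowercase, split on non-letters, count dictionary hits per 100 words
  words <- unlist(strsplit(tolower(text), "[^a-z]+"))
  words <- words[words != ""]
  if (length(words) == 0) return(0)
  100 * sum(words %in% partisan_terms) / length(words)
}

partisan_score("The radical Democrats are blocking this sensible rule.")
```

A validated approach would replace the word list with terms derived from the hand-coded comments.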
Does either conformity with or deviation from the party of the President affect congressional success?
One way to assess agency independence is to study the effect of congressmembers’ party affiliation, specifically whether it conforms with the President’s party affiliation.
We have the following data for 10 congressmembers:

- Success: whether agencies fulfilled congressmembers’ demands, measured between -2 and 2.
- Party: “1” if the congressmember and the President share a party, “0” if they differ.
- Independence: estimates of agency independence, derived from Selin’s data.
- Coalitions: sizes of the coalitions that the congressmember is affiliated with.
Table @ref(tab:data-sim) shows ten rows of simulated data.
library(tidyverse)
library(tibble)
library(msm)      # rtnorm() for truncated normal draws
library(kableExtra)
library(broom)    # tidy(), augment(), glance()
library(magrittr) # %<>% assignment pipe
library(here)
library(readxl)
# simulate a 5-point success score for 1000 rules
congressional_success <- sample(x = c(-2, -1, 0, 1, 2), 1000,
                                prob = c(0.1, 0.3, 0.1, 0.4, 0.1), replace = TRUE)

d <- tibble(
  rule_id = c(1:1000, rep(1001:1500, 2)),
  congress_id = sample(1:2000),
  coalitions = c(rep(1, 1000), rep(2, 1000)),
  congressional_success = c(congressional_success, sort(congressional_success)),
  coalition_size = rtnorm(1000, mean = 5, sd = 10, lower = 1) %>% rep(2) %>% round(),
  party_affiliation = sample(x = c(0, 1), 2000, replace = TRUE, prob = c(0.7, 0.3)),
  independence = sample(x = c(4.1, 1.643, 0.174, 0.218, 2.269), 2000, replace = TRUE),
  comments = c(rtnorm(1000, mean = 10000, sd = 100000, lower = 100), rep(1, 1000)) %>%
    sample() %>% round(),
  cong_support = c(rtnorm(1000, mean = 1, sd = 5, lower = 0), rep(0, 1000)) %>%
    sample() %>% round()
)

d %>% sample_n(10) %>% dplyr::select(rule_id, congress_id, everything())
## # A tibble: 10 × 9
## rule_id congress_id coalitions congressional_success coalition_size
## <int> <int> <dbl> <dbl> <dbl>
## 1 1317 1583 2 1 14
## 2 305 1406 1 2 34
## 3 988 1113 1 1 2
## 4 678 683 1 1 21
## 5 346 1970 1 1 8
## 6 434 407 1 1 6
## 7 918 332 1 -1 7
## 8 941 642 1 -1 13
## 9 289 1225 1 1 17
## 10 185 265 1 0 12
## # … with 4 more variables: party_affiliation <dbl>, independence <dbl>,
## # comments <dbl>, cong_support <dbl>
H1: Members of Congress of the same party as the President have their demands fulfilled more often than congressmembers of a different political party. That is, the relationship between congressional success and agency independence differs by party affiliation.

H0: There is no difference in congressional success between members of Congress of the same party as the President and those of a differing party. That is, the relationship between congressional success and agency independence does not differ by party affiliation.
The dependent variable is congressional success. For congressmember \(i\), let congressional success be \(y_i\) in the model \(y_i = \beta_0 + ... + \epsilon_i\). \(\beta_0\) is the predicted congressional success, \(\hat{y}\), when all other variables in the model are 0.
Does the model, \(y_i = \beta_0 + \beta_1*party_i + \epsilon_i\), test the relationship of interest?
model <- lm(congressional_success ~ party_affiliation, data = d)
m <- model %>%
tidy(conf.int = TRUE)
m
## # A tibble: 2 × 7
## term estimate std.error statistic p.value conf.low conf.high
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 0.0860 0.0322 2.67 0.00770 0.0228 0.149
## 2 party_affiliation -0.0104 0.0604 -0.172 0.864 -0.129 0.108
ggplot(m %>% filter(term != "(Intercept)")) +
aes(x = term,
y = estimate,
ymin = conf.low,
ymax = conf.high) +
geom_pointrange() +
geom_hline(yintercept = 0, color = "grey") +
coord_flip() +
labs(x="", y="OLS Estimate")
# scatterplot, saved as p for future use
p <- ggplot(d) +
  aes(x = party_affiliation, y = congressional_success) +
  geom_jitter(aes(alpha = coalition_size))
p
# illustrating with the yhat formula; more easily done with augment()
b0 <- m$estimate[1]
b1 <- m$estimate[2]
p +
  geom_line(aes(color = "Conformity", # yhat for the President's party (party = 1)
                y = b0 + b1*1) ) +
  geom_line(aes(color = "Deviation",  # yhat for the opposition party (party = 0)
                y = b0 + b1*0) ) +
  geom_ribbon(aes(ymax = b0 + b1*1,
                  ymin = b0 + b1*0), alpha = .1, color = NA)
A t-test: compare the model output to a simple t-test of the difference in mean congressional success by party.
# tidy model output object
m
## # A tibble: 2 × 7
## term estimate std.error statistic p.value conf.low conf.high
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 0.0860 0.0322 2.67 0.00770 0.0228 0.149
## 2 party_affiliation -0.0104 0.0604 -0.172 0.864 -0.129 0.108
# t-test
t.test(congressional_success ~ party_affiliation, data = d) %>% tidy()
## # A tibble: 1 × 10
## estimate estimate1 estimate2 statistic p.value parameter conf.low conf.high
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 0.0104 0.0860 0.0756 0.170 0.865 1018. -0.110 0.130
## # … with 2 more variables: method <chr>, alternative <chr>
ggplot(d, aes(x = congressional_success)) + geom_histogram()+ labs(x = "Congressional Success")
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
ggplot(d, aes(x = independence)) + geom_histogram()+ labs(x = "Independence")
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
ggplot(d, aes(x = coalition_size)) + geom_histogram()+ labs(x = "Coalition size")
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
\(y_i = \beta_0 + \beta_1*party_i + \beta_2*independence_i + \epsilon_i\)
model_1 <- lm(congressional_success ~ independence + party_affiliation, data = d)
summary(model_1)
##
## Call:
## lm(formula = congressional_success ~ independence + party_affiliation,
## data = d)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.1305 -1.0849 0.8695 0.9286 1.9516
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.05437 0.04534 1.199 0.231
## independence 0.01857 0.01876 0.990 0.322
## party_affiliation -0.00919 0.06042 -0.152 0.879
##
## Residual standard error: 1.219 on 1997 degrees of freedom
## Multiple R-squared: 0.0005056, Adjusted R-squared: -0.0004954
## F-statistic: 0.5051 on 2 and 1997 DF, p-value: 0.6035
m1 <- model_1 %>%
tidy(conf.int = TRUE)
m1
## # A tibble: 3 × 7
## term estimate std.error statistic p.value conf.low conf.high
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 0.0544 0.0453 1.20 0.231 -0.0345 0.143
## 2 independence 0.0186 0.0188 0.990 0.322 -0.0182 0.0554
## 3 party_affiliation -0.00919 0.0604 -0.152 0.879 -0.128 0.109
ggplot(m1 %>% filter(term != "(Intercept)")) +
aes(x = term,
y = estimate,
ymin = conf.low,
ymax = conf.high) +
geom_pointrange() +
geom_hline(yintercept = 0, color = "grey") +
coord_flip() +
labs(x="", y="OLS Estimate")
# illustrating with the yhat formula; more easily done with augment()
b0 <- m1$estimate[1]
b1 <- m1$estimate[2] # independence
b2 <- m1$estimate[3] # party_affiliation
p +
  geom_line(aes(color = "Conformity", # yhat for the President's party (party = 1)
                y = b0 + b1*independence + b2*1) ) +
  geom_line(aes(color = "Deviation",  # yhat for the opposition party (party = 0)
                y = b0 + b1*independence + b2*0) ) +
  geom_ribbon(aes(ymax = b0 + b1*independence + b2*1,
                  ymin = b0 + b1*independence + b2*0), alpha = .1, color = NA)
Let’s also plot the residuals. Aside from interpretation, we want to know where our model fits the data better or worse, especially if residuals seem to vary systematically over the range of our data.

`augment()` computes tidy residuals, among other useful things.
m1 <- augment(model_1)
p +
  geom_line(aes(y = m1$.fitted)) + # with .fitted from augment()
  geom_point(aes(y = m1$.fitted), shape = 1, alpha = .2) + # with .fitted from augment()
  geom_segment(aes(xend = party_affiliation, yend = m1$.fitted), alpha = .2, size = 2)
ggplot(m1) +
  aes(y = .resid, x = party_affiliation) +
  geom_point(aes(color = party_affiliation)) +
  scale_color_viridis_c() +
  ## to show how residuals are the distance between an
  ## observation and the regression line:
  geom_hline(yintercept = 0, color = "dark grey") +
  geom_text(x = mean(m1$party_affiliation), y = 0,
            label = "Regression line") +
  ## labels:
  labs(title = "Residuals (Observed - Predicted Congressional Success)",
       y = "Residuals")
glance(model_1)
## # A tibble: 1 × 12
## r.squared adj.r.squared sigma statistic p.value df logLik AIC BIC
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 0.000506 -0.000495 1.22 0.505 0.604 2 -3232. 6472. 6495.
## # … with 3 more variables: deviance <dbl>, df.residual <int>, nobs <int>
Unsurprisingly, this model yields no significant results (Figure @ref(fig:model-success-plot-sim), Table @ref(tab:mediation-sim)). With lobbying success as the dependent variable, the coefficient on the main variable of interest would be interpreted as a \(\beta_{logmasscomments}\) increase on the five-point influence scale of lobbying success for each one-unit increase in the logged number of comments.
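This interpretation can be illustrated on simulated data. A minimal, self-contained sketch (the stand-in `d` below is hypothetical, constructed here only so the snippet runs on its own; it is not the study’s data):

```r
# Sketch: regress the five-point success score on logged comment volume.
# The coefficient on log(comments) is the change in the success scale
# per one-unit increase in logged comments.
set.seed(42)
d <- data.frame(
  congressional_success = sample(-2:2, 500, replace = TRUE),
  comments = exp(rnorm(500, mean = 7, sd = 1)) # right-skewed comment counts
)
model_comments <- lm(congressional_success ~ log(comments), data = d)
coef(summary(model_comments))["log(comments)", ]
```

Logging the comment count keeps a handful of mass-comment campaigns from dominating the fit.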
library(googlesheets4)
#FIXME move this into another script updating the comment data from google sheet, so you don't need to ping google sheets every time you knit
comments <- read_sheet("https://docs.google.com/spreadsheets/d/1HBjG32qWVdf9YxfGPEJhNmSw65Z9XzPhHdDbLnc3mYc/edit?usp=sharing")
## ✓ Reading from "comments_congress".
## ✓ Range 'comments_congress_clean'.
## Warning in .Primitive("as.double")(x, ...): NAs introduced by coercion
## Warning in .Primitive("as.double")(x, ...): NAs introduced by coercion
## Warning in .Primitive("as.double")(x, ...): NAs introduced by coercion
## New names:
## * `` -> ...1
## * `` -> ...40
load(here("data/members.Rdata"))
agency_data <- read_excel(here("data/federal_agencies_estimate.xlsx"))
# presidential first-dimension NOMINATE score by congress
members %<>%
  mutate(nominate_pres = case_when(
    congress < 111 ~ 0.693,
    congress >= 111 & congress < 115 ~ -0.358,
    congress >= 115 & congress < 117 ~ 0.403,
    congress >= 117 ~ -0.320))
# sanity check: every congress should now have a president score
members %>%
  distinct(congress, nominate_pres) %>%
  arrange(congress)
## # A tibble: 12 × 2
##    congress nominate_pres
##       <int>         <dbl>
##  1      105         0.693
##  2      106         0.693
##  3      107         0.693
##  4      108         0.693
##  5      109         0.693
##  6      110         0.693
##  7      111        -0.358
##  8      112        -0.358
##  9      113        -0.358
## 10      114        -0.358
## 11      115         0.403
## 12      116         0.403
# Make ideological distance from president variable
members %<>% mutate(
nominate_diff = nominate.dim1-nominate_pres)
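Note that `nominate_diff` is a signed difference; a robustness check might use the absolute ideological distance instead. A sketch with illustrative stand-in values (the `nominate_dist` name and the `members_demo` data are my own, not from the original data):

```r
# Absolute ideological distance from the President, as an unsigned
# alternative to the signed nominate_diff above (stand-in values).
members_demo <- data.frame(nominate.dim1 = c(-0.45, 0.3, -0.77),
                           nominate_pres = -0.358)
members_demo$nominate_dist <- abs(members_demo$nominate.dim1 - members_demo$nominate_pres)
members_demo$nominate_dist
```

The signed version preserves direction (left vs. right of the President); the absolute version captures only distance.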
# subset comments to those from an icpsr-matched member of Congress
comments1 <- comments %>% drop_na(icpsr)
# select variables we want from members
members_selected <- members %>% select(icpsr, chamber, congress, nominate.dim1, nominate_pres, nominate_diff, party_name)
comments1 %<>% left_join(members_selected)
## Joining, by = c("congress", "chamber", "icpsr")
# Create agency variable from id
comments1 %<>% mutate(Agency = str_remove(id, "-.*") )
# Inspect agencies
# unique(d$Agency)
# Join in agency independence scores
comments1 %<>% left_join(agency_data, by = "Agency", copy = FALSE)
# Subset to agencies for this study
comments2 <- comments1 %>% filter(!is.na(Estimate))
# inspect agencies
# unique(comments2$Agency)
comments2 %>%
select(congress, bioname, icpsr, party_name, nominate.dim1, nominate_pres, nominate_diff, Agency, Estimate)
## # A tibble: 370 × 9
## congress bioname icpsr party_name nominate.dim1 nominate_pres nominate_diff
## <dbl> <chr> <dbl> <chr> <dbl> <dbl> <dbl>
## 1 112 MALONEY,… 29379 Democrati… -0.387 -0.358 -0.0290
## 2 112 McCASKIL… 40701 Democrati… -0.143 -0.358 0.215
## 3 112 HEINRICH… 20930 Democrati… -0.325 -0.358 0.0330
## 4 113 MENENDEZ… 29373 Democrati… -0.365 -0.358 -0.00700
## 5 112 GRAVES, … 20124 Republica… 0.442 -0.358 0.8
## 6 113 STIVERS,… 21163 Republica… 0.299 -0.358 0.657
## 7 113 PERLMUTT… 20705 Democrati… -0.282 -0.358 0.076
## 8 112 HINOJOSA… 29763 Democrati… -0.34 -0.358 0.0180
## 9 113 BOXER, B… 15011 Democrati… -0.45 -0.358 -0.092
## 10 113 WARREN, … 41301 Democrati… -0.77 -0.358 -0.412
## # … with 360 more rows, and 2 more variables: Agency <chr>, Estimate <dbl>
# scatterplots
ggplot(comments2) +
aes(x = nominate_diff, y = success) +
geom_jitter(aes(alpha = congress))
## Warning: Removed 346 rows containing missing values (geom_point).
# add color by party, and save plot as p for future use
p <- ggplot(comments2) +
  aes(x = nominate_diff, y = success, color = party_name) +
  geom_jitter(aes(alpha = congress)) + scale_color_discrete()
p
## Warning: Removed 346 rows containing missing values (geom_point).
model <- lm(success ~ nominate_diff, data = comments2)
m <- model %>%
tidy(conf.int = TRUE)
m
## # A tibble: 2 × 7
## term estimate std.error statistic p.value conf.low conf.high
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) -0.282 0.411 -0.686 0.500 -1.13 0.570
## 2 nominate_diff 1.10 0.656 1.68 0.107 -0.257 2.46
ggplot(m %>% filter(term != "(Intercept)")) +
aes(x = term,
y = estimate,
ymin = conf.low,
ymax = conf.high) +
geom_pointrange() +
geom_hline(yintercept = 0, color = "grey") +
coord_flip() +
labs(x="", y="OLS Estimate")
# illustrating with the yhat formula; more easily done with augment()
# nominate_diff is continuous, so plot yhat over its range
b0 <- m$estimate[1]
b1 <- m$estimate[2]
p +
  geom_line(aes(y = b0 + b1*nominate_diff))
## Warning: Removed 346 rows containing missing values (geom_point).
## Warning: Removed 37 row(s) containing missing values (geom_path).
ggplot(comments2, aes(x = success)) + geom_histogram()+ labs(x = "Congressional Success")
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
## Warning: Removed 346 rows containing non-finite values (stat_bin).
ggplot(comments2, aes(x = Estimate)) + geom_histogram()+ labs(x = "Independence")
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
ggplot(comments2, aes(x = nominate_diff)) + geom_histogram()+ labs(x = "Diff in nominate scores")
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
## Warning: Removed 37 rows containing non-finite values (stat_bin).
### Multiple linear regression

\(y_i = \beta_0 + \beta_1 nominate\_diff_i + \beta_2 independence_i + \epsilon_i\)
model_1 <- lm(success ~ nominate_diff + Estimate, data = comments2)
summary(model_1)
##
## Call:
## lm(formula = success ~ nominate_diff + Estimate, data = comments2)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.6768 -0.8271 0.2129 1.1753 2.3833
##
## Coefficients: (1 not defined because of singularities)
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -0.2818 0.4109 -0.686 0.500
## nominate_diff 1.1031 0.6560 1.681 0.107
## Estimate NA NA NA NA
##
## Residual standard error: 1.419 on 22 degrees of freedom
## (346 observations deleted due to missingness)
## Multiple R-squared: 0.1139, Adjusted R-squared: 0.07359
## F-statistic: 2.827 on 1 and 22 DF, p-value: 0.1068
m1 <- model_1 %>%
tidy(conf.int = TRUE)
m1
## # A tibble: 3 × 7
## term estimate std.error statistic p.value conf.low conf.high
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) -0.282 0.411 -0.686 0.500 -1.13 0.570
## 2 nominate_diff 1.10 0.656 1.68 0.107 -0.257 2.46
## 3 Estimate NA NA NA NA NA NA
ggplot(m1 %>% filter(term != "(Intercept)")) +
aes(x = term,
y = estimate,
ymin = conf.low,
ymax = conf.high) +
geom_pointrange() +
geom_hline(yintercept = 0, color = "grey") +
coord_flip() +
labs(x="", y="OLS Estimate")
## Warning: Removed 1 rows containing missing values (geom_pointrange).
model_2 <- lm(success ~ nominate_diff, data = comments2)
m2 <- augment(model_2)
ggplot(m2) +
  aes(x = nominate_diff, y = success) +
  geom_point() +
  geom_line(aes(y = .fitted)) + # with .fitted from augment()
  geom_point(aes(y = .fitted), shape = 1, alpha = .2) +
  geom_segment(aes(xend = nominate_diff, yend = .fitted), alpha = .2, size = 2)
\(y_i = \beta_0 + \beta_1 nominate\_diff_i + \beta_2 independence_i + \epsilon_i\)

\(y_i = \beta_0 + \beta_1 nominate\_diff_i + \beta_2 independence_i + \beta_3 (nominate\_diff_i \times independence_i) + \epsilon_i\)
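The interaction model above can be fit directly, since `a * b` in an R formula expands to both main effects plus their product term. A sketch with stand-in data (`comments2_demo` is hypothetical; in the real `comments2`, `Estimate` was collinear in `model_1` and could return `NA` terms):

```r
# `a * b` in an R formula expands to a + b + a:b
# (both main effects plus the interaction term).
set.seed(7)
comments2_demo <- data.frame(
  success = sample(-2:2, 100, replace = TRUE),
  nominate_diff = rnorm(100),
  Estimate = sample(c(4.1, 1.643, 0.174), 100, replace = TRUE)
)
model_3 <- lm(success ~ nominate_diff * Estimate, data = comments2_demo)
names(coef(model_3))
```

A significant `nominate_diff:Estimate` coefficient would indicate that the effect of ideological distance on success depends on agency independence.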
Wendy E. Wagner describes a salient issue within the rulemaking process: well-funded interest groups attempt to overwhelm the decision-making process in a regulatory context. She states, “information capture refers to the excessive use of information and related information costs as a means of gaining control over regulatory decision-making in informal rulemakings.” In her study, she describes the excessive use of telephone calls, emails, memoranda, and petitions of appeal to bombard overstretched agency staff, forcing them to disregard their own expert opinions on the rule and leaving them unable to properly process other sides of the issue. Congresspeople are often considered part of well-funded interest groups, especially given that many jointly write letters to petition for or against an issue. As such, this may skew the overall success of congressmembers’ demands. Even if congressmembers do not employ information capture to influence the success of their demands, they may be associated with the interest groups that do. It is important to draw this distinction, as the results depend on how rulemaking agencies weigh congressmembers’ asks and on the agencies’ own levels of federal independence, rather than on the tactics used by an interest group.
A federal agency’s level of independence is not, in the end, the biggest influence on whether a demand is fulfilled; rather, it is interest groups’ failure to filter the information they give agencies, i.e., filter failure. Even agencies with a high level of independence and a commitment to openness and transparency will not be able to avoid the problem of information capture. Regulatory solutions responsive to a diverse set of interest groups are no longer possible when much of the relevant information is buried within mountains of material. Wagner continues, “once excessive information begins to gum up the works, simple fixes are no longer possible. Radical institutional overhaul becomes the only viable remedy.” Information theory underscores the point: ensuring that communications to large governing bodies such as regulatory agencies are efficient and effective encapsulates the issue for all interested parties on a given rule.
The financial rulemaking process is often opaque and difficult to understand, which exacerbates the issue. Well-funded groups, paired with congressional pressure, employ extremely technical language as well as overly detailed arguments about unrelated aspects of the rule, thereby gumming up the rulemaking process. Wagner claims, “To preserve issues for litigation, affected parties are thus best-advised to provide comments that are specific, detailed, and well documented. This seemingly reasonable requirement for specificity again encourages interested parties to provide too much documentation, too many specifics, and too much detail, rather than too little.” Given the increasing information costs of overly technical comments, this becomes the basis of well-funded interest groups’ comment-issuing process. Further, increasing amounts of direct participation with federal agencies may cause the agencies to rely increasingly on the experts within the interest groups, rather than retaining their own experts and objective opinions.
Wagner provides several means of avoiding information capture, including increasing the civic-mindedness of agency staff, as effective agency leadership and awareness of the information capture process are powerful aids to maintaining objectivity. However, she postulates that this is not enough, as agency staff “will face an uphill legal battle to surmount all of the one-sided pressures.” Moreover, the hiring of agency staff must be transparent and admit only those without a strong favoring of a particular interest group, which is a difficult and nebulous standard. Wagner also suggests providing the public with “a number of legislative and executive innovations, such as cost-benefit analysis” in order to make the information more digestible for a layperson and to create an opportunity to focus on different implications of the rule.
Interestingly, Wagner also provides a mechanism that allows congressmembers to alleviate, rather than contribute to, the issue. While civic-minded agency staff are powerful, civic-minded bureaucrats within Congress and the White House are more powerful. Drafting regulations that limit the amount of information distributed may be effective in preventing information capture. However, it is then imperative for congressmembers to maintain objectivity within the rulemaking process; as it stands, this is not the case, as congressmembers often issue public comments with their own opinions on the matter. This would require a complete overhaul of the rulemaking process as it stands, as well as additional judicial review and oversight boards to maintain politicians’ objectivity.